New slurm customization parameters (account, containers) #1209
Conversation
Signed-off-by: Igor Gitman <igitman@nvidia.com>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the repository settings.
📝 Walkthrough

Adds optional Slurm account and per-component container-image override CLI options across multiple pipeline commands and threads them through task creation, HardwareConfig, executor creation, and sbatch submission, so tasks run with the specified account/container choices while falling back to existing defaults.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI as "User CLI"
    participant Pipeline as "Pipeline (convert/generate/eval/run_cmd/start_server)"
    participant AddTask as "add_task"
    participant Executor as "get_executor"
    participant Slurm as "Slurm/sbatch"
    CLI->>Pipeline: invoke command with account & container overrides
    Pipeline->>AddTask: build task params (container = override or default, account)
    AddTask->>Executor: request executor(container, account, hardware...)
    Executor->>Slurm: submit job (sbatch kwargs include account)
    Slurm-->>Executor: job id
    Executor-->>AddTask: executor handle
    AddTask-->>Pipeline: task registered
    Pipeline-->>CLI: return task info
```
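The override-or-default semantics described in the walkthrough can be sketched in a few lines of Python. This is only an illustration; the helper names and signatures below are hypothetical stand-ins, not the actual nemo_skills API.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class HardwareConfig:
    partition: Optional[str] = None
    account: Optional[str] = None
    num_gpus: Optional[int] = None
    num_nodes: int = 1


def resolve_container(cluster_config: dict, key: str, override: Optional[str]) -> str:
    # A user-supplied override wins; otherwise fall back to the cluster default.
    return override if override is not None else cluster_config["containers"][key]


def build_sbatch_kwargs(hw: HardwareConfig) -> dict:
    # Only include account in the sbatch kwargs when the user actually set one,
    # preserving the cluster's default-account behavior otherwise.
    kwargs = {}
    if hw.account is not None:
        kwargs["account"] = hw.account
    return kwargs


cfg = {"containers": {"nemo-skills": "default-image:latest"}}
print(resolve_container(cfg, "nemo-skills", None))            # default-image:latest
print(resolve_container(cfg, "nemo-skills", "custom-image:1"))  # custom-image:1
print(build_sbatch_kwargs(HardwareConfig(account="my-account")))  # {'account': 'my-account'}
```

The key property reviewers check for below is exactly the second branch: when no override or account is given, behavior is unchanged.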
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
nemo_skills/pipeline/nemo_evaluator.py (1)
560-590: ⚠️ Potential issue | 🟡 Minor

Don't silently ignore `exclusive`.

The parameter is accepted and threaded through, but never applied. Either honor it via `sbatch_kwargs` or fail fast when it's set, so users don't think they're getting exclusive nodes.

Suggested fail-fast guard:

```diff
 def _hardware_for_group(
     partition: Optional[str],
     account: Optional[str],
     num_gpus: Optional[int],
     num_nodes: int,
     qos: Optional[str],
     exclusive: bool,
 ) -> HardwareConfig:
+    if exclusive:
+        raise ValueError("exclusive is not supported for nemo_evaluator jobs yet; remove --exclusive.")
     return HardwareConfig(
         partition=partition,
         account=account,
         num_gpus=num_gpus,
         num_nodes=num_nodes,
```

As per coding guidelines, avoid silently ignoring unused user-passed parameters. The code should fail if a user specifies an unsupported argument or if a required argument is not provided.
nemo_skills/pipeline/eval.py (1)
816-866: ⚠️ Potential issue | 🟠 Major

Account override is missing for summarize/compute-score tasks.

When a user specifies `--account`, these Slurm tasks still run under the default account and can fail on clusters without a default. Please propagate `account=account` in both `add_task` calls.

🔧 Proposed fix:

```diff
 summarize_task = pipeline_utils.add_task(
     exp,
     cmd=command,
     task_name=f"{expname}-{benchmark}-summarize-results",
     log_dir=f"{output_dir}/{benchmark_args.eval_subfolder}/summarized-results",
     container=cluster_config["containers"]["nemo-skills"],
     cluster_config=cluster_config,
+    account=account,
     run_after=run_after,
     reuse_code_exp=reuse_code_exp,
     reuse_code=reuse_code,
     task_dependencies=(
         dependent_tasks if cluster_config["executor"] == "slurm" else all_tasks + _task_dependencies
     ),
     installation_command=installation_command,
     skip_hf_home_check=skip_hf_home_check,
     sbatch_kwargs=sbatch_kwargs,
 )
@@
 score_task = pipeline_utils.add_task(
     exp,
     cmd=command,
     task_name=f"{expname}-{group}-compute-score",
     log_dir=f"{output_dir}/eval-results/{group}/compute-score-logs",
     container=cluster_config["containers"]["nemo-skills"],
     cluster_config=cluster_config,
+    account=account,
     run_after=run_after,
     reuse_code_exp=reuse_code_exp,
     reuse_code=reuse_code,
     task_dependencies=(
         group_tasks[group] if cluster_config["executor"] == "slurm" else all_tasks + _task_dependencies
     ),
     installation_command=installation_command,
     skip_hf_home_check=skip_hf_home_check,
     sbatch_kwargs=sbatch_kwargs,
 )
```

As per coding guidelines, avoid silently ignoring unused user-passed parameters. The code should fail if a user specifies an unsupported argument or if a required argument is not provided. Use dataclasses or `**kwargs` syntax to handle this automatically.
gwarmstrong
left a comment
In general looks good. Have a minor comment about goals for the future with this, but I don't think it requires action.
```python
main_container: str = typer.Option(None, help="Override container image for the main evaluation client"),
sandbox_container: str = typer.Option(None, help="Override container image for the sandbox"),
judge_container: str = typer.Option(None, help="Override container image for GPU-based judges (comet, nvembed)"),
```
I think it's a little bulky to have separate override arguments for each container everywhere. I'm not sure there is a better solution, though. We could have overrides like we do for tools, e.g.,

```
++container_overrides.sandbox = "..."
++container_overrides.judge = "..."
```

But then the choice of key is unclear, since our "job components" (e.g., judge, main, sandbox, ...) don't map cleanly to a container name (e.g., "judge" -> containers[judge_server_type], main -> containers["nemo-skills"], sandbox -> containers["sandbox"]).

So I think with the current structure, what you've done is the best choice, but maybe we can eventually work toward something a little more general here.
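A rough sketch of what such a more general `container_overrides` mechanism could look like. This is entirely hypothetical and exists only to make the key-choice problem above concrete: "main" and "sandbox" can use a static map, but "judge" cannot, because its container key depends on `judge_server_type`.

```python
from typing import Optional

# Hypothetical static mapping from job component to cluster-config container key.
COMPONENT_TO_CONTAINER_KEY = {
    "main": "nemo-skills",
    "sandbox": "sandbox",
}


def resolve(component: str, overrides: dict, containers: dict,
            judge_server_type: Optional[str] = None) -> str:
    # An explicit user override always wins.
    if component in overrides:
        return overrides[component]
    # "judge" has no static key: it depends on which judge server is used.
    if component == "judge":
        return containers[judge_server_type]
    return containers[COMPONENT_TO_CONTAINER_KEY[component]]


containers = {"nemo-skills": "ns:latest", "sandbox": "sb:latest", "vllm": "vllm:latest"}
print(resolve("sandbox", {"sandbox": "custom:1"}, containers))     # custom:1
print(resolve("main", {}, containers))                             # ns:latest
print(resolve("judge", {}, containers, judge_server_type="vllm"))  # vllm:latest
```

The special-cased "judge" branch is exactly the wrinkle that makes a uniform override scheme awkward today.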
Signed-off-by: Igor Gitman <igitman@nvidia.com>
Signed-off-by: George Armstrong <georgea@nvidia.com> # Conflicts: # nemo_skills/pipeline/start_server.py
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
nemo_skills/pipeline/eval.py (1)
807-858: ⚠️ Potential issue | 🟠 Major

`account` not forwarded to summarize/score tasks — can cause job rejection on enforced-accounting clusters.

The `account` parameter accepted by `eval` is propagated to main eval tasks and judge tasks, but the `summarize_results` task (line 807) and `compute_group_score` task (line 841) silently omit it. On Slurm clusters that require account specification for every job submission, these tasks will fail or be billed to the wrong account.

🐛 Proposed fix:

```diff
 summarize_task = pipeline_utils.add_task(
     exp,
     cmd=command,
     task_name=f"{expname}-{benchmark}-summarize-results",
     log_dir=f"{output_dir}/{benchmark_args.eval_subfolder}/summarized-results",
     container=cluster_config["containers"]["nemo-skills"],
     cluster_config=cluster_config,
+    account=account,
     run_after=run_after,

 score_task = pipeline_utils.add_task(
     exp,
     cmd=command,
     task_name=f"{expname}-{group}-compute-score",
     log_dir=f"{output_dir}/eval-results/{group}/compute-score-logs",
     container=cluster_config["containers"]["nemo-skills"],
     cluster_config=cluster_config,
+    account=account,
     run_after=run_after,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@nemo_skills/pipeline/eval.py` around lines 807 - 858, The summarize_results and compute_group_score tasks omit the account setting; when creating summarize_task and score_task via pipeline_utils.add_task (the calls that create summarize_task and score_task), forward the account parameter (e.g., account=account) so the job runs under the correct Slurm account; update both add_task invocations (the summarize_task and score_task calls) to include account=account (or propagate account from the surrounding eval function/args) and ensure any sbatch_kwargs/account merging logic remains consistent.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- nemo_skills/pipeline/eval.py
- nemo_skills/pipeline/start_server.py
- nemo_skills/pipeline/utils/exp.py
🚧 Files skipped from review as they are similar to previous changes (1)
- nemo_skills/pipeline/utils/exp.py
Signed-off-by: George Armstrong <georgea@nvidia.com> # Conflicts: # nemo_skills/pipeline/eval.py
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
nemo_skills/pipeline/eval.py (1)
623-674: ⚠️ Potential issue | 🟠 Major

`account` is silently dropped for `summarize_results` and `compute_group_score` tasks.

Both `add_task` calls (lines 623 and 657) are missing `account=account`, so a user-specified Slurm account is ignored for these tasks while being respected everywhere else. As per coding guidelines, "avoid silently ignoring user-passed parameters."

🐛 Proposed fix:

```diff
 summarize_task = pipeline_utils.add_task(
     exp,
     cmd=command,
     task_name=f"{expname}-{benchmark}-summarize-results",
     log_dir=f"{output_dir}/{benchmark_args.eval_subfolder}/summarized-results",
     container=cluster_config["containers"]["nemo-skills"],
     cluster_config=cluster_config,
+    account=account,
     run_after=run_after,
     ...
 )

 score_task = pipeline_utils.add_task(
     exp,
     cmd=command,
     task_name=f"{expname}-{group}-compute-score",
     log_dir=f"{output_dir}/eval-results/{group}/compute-score-logs",
     container=cluster_config["containers"]["nemo-skills"],
     cluster_config=cluster_config,
+    account=account,
     run_after=run_after,
     ...
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@nemo_skills/pipeline/eval.py` around lines 623 - 674, The summarize_results and compute_group_score tasks created via pipeline_utils.add_task (referenced as summarize_task and score_task) are missing the account parameter so a user-specified Slurm account is ignored; fix by passing account=account into both add_task calls that create summarize_task and score_task (the two pipeline_utils.add_task invocations building the summarize-results and compute-score tasks) so the Slurm account is honored like other tasks.
The test calls _create_job_unified() which now requires account as a positional argument after the addition of the --account CLI option. Signed-off-by: George Armstrong <georgea@nvidia.com>
🧹 Nitpick comments (1)
tests/test_generation.py (1)
176-188: Exercise the non-default `account` path in this test.

Line 184 currently passes `account=None`, so this only validates the default path and does not verify that user-provided account values are threaded into the generated job metadata/command. Consider using a sentinel account (e.g., `"test-account"`) and asserting it propagates to the expected output object/args.

As per coding guidelines, "Avoid silently ignoring user-passed parameters; fail if a required parameter is not specified or an unsupported parameter is provided."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_generation.py` around lines 176 - 188, The test currently only exercises the default account path because _create_job_unified is called with account=None; change the test to pass a sentinel account string (e.g., "test-account") to the account parameter when calling _create_job_unified and add an assertion that this value is propagated into the returned job metadata/command (inspect the output object(s) in the test and assert the account field or command arg equals "test-account"); update any related assertions that assumed None/default to reflect the explicit account so the non-default path is validated.
…Skills into igitman/account-arg
Signed-off-by: Igor Gitman <igitman@nvidia.com>
Per-request inference timeout (120s) and pytest-level test timeout (300s) for test_eval_gsm8k_api and test_eval_judge_api. Prevents external API hangs from blocking CI for 8+ minutes. Signed-off-by: George Armstrong <georgea@nvidia.com>
The judge step in test_eval_judge_api runs as a separate nemo-run job and doesn't inherit ++inference.timeout from the main generation step. Pass it via --extra_judge_args to prevent judge hangs too. Signed-off-by: George Armstrong <georgea@nvidia.com>
Root cause: litellm max_retries=3 (default) compounds with inference.timeout — a single hanging request can take up to timeout * (max_retries + 1) = 120s * 4 = 480s, exceeding the 300s pytest timeout. Setting max_retries=0 ensures a timeout fails immediately without silent retries. Signed-off-by: George Armstrong <georgea@nvidia.com>
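The worst-case arithmetic in that commit message is easy to check:

```python
# Each retry attempt can hang for the full per-request timeout, so the
# worst-case wall time is timeout * (max_retries + 1).
def worst_case_seconds(timeout, max_retries):
    return timeout * (max_retries + 1)


print(worst_case_seconds(120, 3))  # 480, exceeding the 300s pytest timeout
print(worst_case_seconds(120, 0))  # 120, well within the budget
```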
…l name Signed-off-by: George Armstrong <georgea@nvidia.com>
Signed-off-by: Igor Gitman <igitman@nvidia.com> Signed-off-by: George Armstrong <georgea@nvidia.com> Co-authored-by: George Armstrong <georgea@nvidia.com>
Summary by CodeRabbit
New Features
Tests